
    Analysis of Neural Networks in Terms of Domain Functions

    Despite their success story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a mysterious "black box". Although much research has already been done to "open the box," there is a notable gap in the published work on the analysis of neural networks. So far, mainly sensitivity analysis and rule extraction methods have been used to analyze neural networks. However, these can only be applied in a limited subset of the problem domains where neural network solutions are encountered. In this paper we propose a more widely applicable method which, for a given problem domain, involves identifying basic functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those basic functions. This will provide a comprehensible description of the neural network's function and, depending on the chosen base functions, it may also provide insight into the neural network's inner "reasoning." It could further be used to optimize neural network systems. An analysis in terms of base functions may even make clear how to (re)construct a superior system using those base functions, thus using the neural network as a construction advisor.
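    To make the idea concrete, here is a minimal, hypothetical sketch (not the paper's actual procedure) of describing a single trained unit in terms of domain base functions: the unit's response over a set of probe inputs is fitted, by least squares, to the responses of candidate base functions, and the coefficients plus the residual error form the description. The names `unit_response`, `base_functions`, and `inputs` are assumptions for illustration.

```python
# Hypothetical sketch (not the paper's algorithm): describe one trained unit's
# response as a least-squares combination of domain "base functions".
import numpy as np

def describe_unit(unit_response, base_functions, inputs):
    """unit_response: callable mapping one input sample to a scalar activation.
    base_functions: dict of name -> callable with the same signature.
    inputs: sequence of probe samples drawn from the problem domain."""
    y = np.array([unit_response(x) for x in inputs])                 # unit activations
    B = np.column_stack([[f(x) for x in inputs]
                         for f in base_functions.values()])          # base-function responses
    coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)                   # least-squares fit
    fit_error = np.linalg.norm(B @ coeffs - y) / np.linalg.norm(y)   # relative mismatch
    return dict(zip(base_functions.keys(), coeffs)), fit_error
```

    A small fit error would indicate the unit is well described by the chosen base functions; a large error suggests the base set should be extended.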

    On the Analysis of Neural Networks for Image Processing

    This paper illustrates a novel method to analyze artificial neural networks so as to gain insight into their internal functionality. To this purpose, we will show analysis results of some feed-forward error-back-propagation neural networks for image processing. We will describe them in terms of domain-dependent basic functions, which are, in the case of the digital image processing domain, differential operators of various orders and with various angles of operation. Some other pixel classification techniques are analyzed in the same way, enabling easy comparison.
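    As an illustration of what such a description could look like in the image processing domain, the sketch below (an assumption, not the paper's code) correlates a learned convolution kernel with first-order derivative-of-Gaussian operators at a few orientations and reports the best match.

```python
# Hypothetical illustration of the "describe as differential operators" idea:
# correlate a learned square kernel with oriented first-order derivative operators.
import numpy as np

def oriented_derivative_kernel(size=5, sigma=1.0, angle_deg=0.0):
    """First-order derivative of a Gaussian, steered to the given angle."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    theta = np.deg2rad(angle_deg)
    d = np.cos(theta) * xx + np.sin(theta) * yy       # coordinate along the derivative direction
    k = -d * g                                        # derivative of the Gaussian along that direction
    return k / np.linalg.norm(k)

def best_matching_operator(weights, angles=(0, 45, 90, 135)):
    """Return the orientation whose derivative operator correlates best with `weights`."""
    w = weights / np.linalg.norm(weights)
    scores = {a: abs(np.sum(w * oriented_derivative_kernel(w.shape[0], 1.0, a)))
              for a in angles}
    return max(scores, key=scores.get), scores
```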

    Analysis of Neural Networks for Edge Detection

    This paper illustrates a novel method to analyze artificial neural networks so as to gain insight into their internal functionality. To this purpose, the elements of a feedforward-backpropagation neural network that has been trained to detect edges in images are described in terms of differential operators of various orders and with various angles of operation.

    Translating Feedforward Neural Nets to SOM-like Maps

    A major disadvantage of feedforward neural networks is still the difficulty of gaining insight into their internal functionality. This is much less the case for, e.g., nets that are trained unsupervised, such as Kohonen's self-organizing feature maps (SOMs). These offer a direct view into the stored knowledge, as their internal knowledge is stored in the same format as the input data that was used for training or is used for evaluation. This paper discusses a mathematical transformation of a feed-forward network into a SOM-like structure such that its internal knowledge can be visually interpreted. This is particularly applicable to networks trained in the general classification problem domain.
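    A rough, hypothetical sketch of the kind of visualisation this enables, assuming a one-hidden-layer feed-forward net (the paper's actual transformation may differ): each hidden unit's incoming weight vector is rescaled into the input's value range and laid out on a grid, the way SOM codebook vectors are usually displayed.

```python
# Hypothetical visualisation in the spirit of the paper, not its actual transformation.
import numpy as np

def weights_to_som_like_map(hidden_weights, grid_shape, input_shape):
    """hidden_weights: array of shape (n_hidden, n_inputs), one row per hidden unit.
    grid_shape: (rows, cols) with rows * cols == n_hidden.
    input_shape: shape the input data had, e.g. (28, 28) for images."""
    rows, cols = grid_shape
    protos = []
    for w in hidden_weights:
        w = (w - w.min()) / (w.max() - w.min() + 1e-12)   # rescale into the input's value range
        protos.append(w.reshape(input_shape))
    # arrange the prototypes on a 2-D grid for inspection, like a SOM codebook
    return np.block([[protos[r * cols + c] for c in range(cols)] for r in range(rows)])
```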

    Ensuring Safety in Distributed Networks

    Large-scale distributed networks are not easy to control centrally. Local set-point variations will likely create a flood of alarm fluctuations. However, local control can cause the emergence of unwanted network behavior. The paper gives an overview of the problems with model-based control in sensor networks, such as those currently being rolled out for the electric grid, and shows how networked intelligence can bridge between central and distributed control.

    Early detection of abnormal emergent behaviour

    Emergent behaviour has become a plague of automation systems based on communication networks. Centralized monitoring of the network generally comes too late to suppress unwanted behaviour. What is required is to mark the tendency towards state changes in a decentralized manner. The paper discusses the role of local awareness by inspection of the model-learning behaviour of feed-forward networks. The correlated movement of weight changes over time provides a clear indication of such profound changes, as demonstrated by some initial experience in industrial automation.
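    The monitoring idea can be sketched as follows; the window length and threshold below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: flag when the weight updates of an online-learning
# feed-forward model start moving in a strongly correlated way.
import numpy as np

def correlated_drift(weight_snapshots, window=20, threshold=0.8):
    """weight_snapshots: array of shape (T, n_weights), flattened weights over time.
    Returns True when recent weight changes are strongly correlated."""
    deltas = np.diff(np.asarray(weight_snapshots)[-(window + 1):], axis=0)  # recent weight updates
    if deltas.shape[0] < 2:
        return False                                              # not enough history yet
    corr = np.nan_to_num(np.corrcoef(deltas.T))                   # correlation between weight trajectories
    n = corr.shape[0]
    off_diagonal = corr[~np.eye(n, dtype=bool)]                   # ignore self-correlation
    return float(np.mean(np.abs(off_diagonal))) > threshold       # assumed alarm criterion
```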

    Self-similar module for FP/LNS arithmetic in high-performance FPGA systems

    The scientific community has gratefully embraced floating-point arithmetic to escape the close attention to accuracy and precision required in fixed-point computational styles. Though its deficiencies are well known, the role of the floating-point system as a standard has kept other number representation systems from coming into practice. The paper discusses the relation between fixed- and floating-point numbers from a pragmatic point of view that allows both systems to be mixed to optimize FPGA-based hardware accelerators. The method is developed for the Mitrion "processor on demand" technology, where a computationally intensive algorithm is transformed into a dedicated processor. The large gap in cycle time between fixed- and floating-point operations and between direct and reverse operations makes the on-chip control for the fine-grain pipelines of parallel logic very complicated. Having alternative hardware realizations available can alleviate this. The paper uses a conjunctive notation, also known as DIGILOG, to introduce a flexible means of creating configurable arithmetic of arbitrary order using a single module type. This allows the Mitrion hardware compiler to match the hardware more closely to the demands of the specific algorithm. Typical applications are in molecular simulation and real-time image analysis.
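    For readers unfamiliar with logarithmic number systems, the following generic LNS sketch shows why mixing number systems is attractive: multiplication becomes a cheap fixed-point-style addition, while addition needs a non-linear correction term, usually a lookup table in hardware. This is a textbook illustration, not the DIGILOG/Mitrion module described in the paper.

```python
# Generic logarithmic-number-system (LNS) illustration, not the paper's module.
import math

def lns_encode(x):
    """Represent a positive value by its base-2 logarithm."""
    return math.log2(x)

def lns_mul(a, b):
    """Multiplication in LNS is a plain, cheap addition of the log representations."""
    return a + b

def lns_add(a, b):
    """Addition in LNS needs the non-linear term log2(1 + 2^(lo - hi))."""
    lo, hi = min(a, b), max(a, b)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))

# Example: 3.0 * 4.0 and 3.0 + 4.0 carried out in the log domain
a, b = lns_encode(3.0), lns_encode(4.0)
assert abs(2.0 ** lns_mul(a, b) - 12.0) < 1e-9
assert abs(2.0 ** lns_add(a, b) - 7.0) < 1e-9
```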

    Design Space Exploration for a DT-CNN

    The paper presents design considerations for a digital Cellular Neural Network. The architectural spectrum is categorized from a rich variety of nodal structural transformations and formulated in an exploration model. This model is then applied to study the influence of the memory bandwidth. Experiments show the benefits of a tiled implementation, mixing spatial network parallelism with time-multiplexed nodes based on a local register stack.
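    For reference, the nodal computation being explored is essentially the textbook discrete-time cellular neural network update; the sketch below uses generic 3x3 templates and a piecewise-linear output function, not the specific architecture of the paper.

```python
# Textbook DT-CNN update step, shown only to make the nodal computation concrete.
import numpy as np
from scipy.signal import convolve2d

def dtcnn_step(x, u, A, B, bias):
    """x: node states, u: constant input image, A/B: 3x3 feedback/control templates."""
    y = np.clip(x, -1.0, 1.0)                                    # piecewise-linear output function
    feedback = convolve2d(y, A, mode="same", boundary="fill")    # feedback from neighbouring outputs
    control = convolve2d(u, B, mode="same", boundary="fill")     # contribution of the input image
    return feedback + control + bias                             # next state of every node

# Usage: iterate x = dtcnn_step(x, u, A, B, bias) until the states settle.
```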